Forensic analysis depends on identifying hidden traces in manipulated images. Conventional neural networks fail at this task because they cannot handle feature attenuation and rely on dominant spatial features. In this work, we propose a novel Gated Context Attention Network (GCA-Net) that uses a non-local attention block for global context learning. In addition, we combine the gated attention mechanism with a dense decoder network to guide the flow of relevant features during the decoding stage, enabling precise localization. The proposed attention framework allows the network to focus on relevant regions by filtering out coarse features. Furthermore, by exploiting multi-scale feature fusion and an efficient learning strategy, GCA-Net can better handle the scale variation of manipulated regions. We show that our method outperforms state-of-the-art networks on multiple benchmark datasets by an average of 4.2%-5.4% AUC. Finally, we also conduct extensive ablation experiments to demonstrate the method's robustness for image forensics.
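The gating idea above can be illustrated in miniature. The sketch below is a toy stand-in with hypothetical names, not the paper's architecture (the actual GCA-Net applies its gates to convolutional feature maps, not flat vectors); it only shows how a sigmoid gate suppresses irrelevant features while passing relevant ones:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_filter(features, gate_logits):
    # Each feature is scaled by a gate in (0, 1): a large negative
    # logit suppresses a "coarse"/irrelevant feature toward zero,
    # a large positive logit passes it through almost unchanged.
    return [f * sigmoid(g) for f, g in zip(features, gate_logits)]

out = gated_filter([2.0, 4.0, -1.0], [-10.0, 10.0, 0.0])
print(out)  # first feature ~0 (gated off), second ~4.0 (kept), third halved
```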
In this paper, we present HS-BAN, a binary-class hate speech (HS) dataset for the Bangla language consisting of more than 50,000 labeled comments, of which 40.17% are hate and the rest are non-hate. While preparing the dataset, a strict and detailed annotation guideline was followed to reduce human annotation bias. The HS dataset was also preprocessed linguistically to extract the different types of slang that people currently write using symbols, acronyms, or alternative spellings. These slangs were further categorized into traditional and non-traditional slang lists and are included in the results of this paper. We explored traditional linguistic features and neural network-based methods to develop a benchmark system for hate speech detection for the Bangla language. Our experimental results show that existing word embedding models trained on informal text perform better than those trained on formal text. Our benchmark shows that a Bi-LSTM model on top of FastText informal word embeddings achieves an 86.78% F1 score. We will make the dataset available for public use.
Owing to recent advances in computer vision, traffic video data has become a key factor in curbing traffic congestion. This work presents a unique technique that uses a color-coding scheme before training traffic data in a deep convolutional neural network. First, the video data is converted into an image dataset. Then, vehicle detection is performed using the You Only Look Once (YOLO) algorithm. A color-coding scheme is adopted to convert the image dataset into a binary image dataset. These binary images are fed into a deep convolutional neural network. Using the UCSD dataset, we obtained a classification accuracy of 98.2%.
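The binarization step of this pipeline can be sketched as follows. The function name, box format, and example detections are our own assumptions for illustration; the abstract does not specify the exact encoding:

```python
def boxes_to_binary(width, height, boxes):
    # Build a binary image: 1 inside any detected vehicle box
    # (x0, y0, x1, y1 in pixels, exclusive upper bounds), 0 elsewhere.
    img = [[0] * width for _ in range(height)]
    for x0, y0, x1, y1 in boxes:
        for y in range(y0, y1):
            for x in range(x0, x1):
                img[y][x] = 1
    return img

# Two hypothetical YOLO detections on a 64x64 frame.
mask = boxes_to_binary(64, 64, [(5, 5, 20, 20), (30, 40, 50, 60)])
vehicle_pixels = sum(map(sum, mask))
print(vehicle_pixels)  # 625 (15*15 + 20*20)
```

The binary mask discards appearance entirely and keeps only occupancy, which is what the downstream CNN classifies.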
Neural code intelligence (CI) models are opaque black boxes that offer little insight into the features they use in making predictions. This opacity may lead to distrust in their predictions and hinder their wide adoption in safety-critical applications. Recently, input program reduction techniques have been proposed to identify the key features in input programs and thereby improve the transparency of CI models. However, this approach is syntax-unaware and does not consider the grammar of the programming language. In this paper, we apply a syntax-guided reduction technique that considers the grammar of the input programs during reduction. Our experiments with multiple models on different types of input programs show that the syntax-guided reduction technique is faster and yields smaller sets of key tokens in the reduced programs. We also show that the key tokens can be used to generate adversarial examples for up to 65% of the input programs.
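The syntax-guided idea can be sketched with Python's own ast module: instead of deleting arbitrary tokens, the reducer deletes whole statements (valid subtrees) and keeps a deletion only when the model's "prediction" is unchanged. The oracle below is a toy stand-in for a CI model, not the models studied in the paper:

```python
import ast

def predicts_add(src):
    # Toy oracle standing in for a CI model: "the prediction holds"
    # iff some return statement still returns a binary addition.
    tree = ast.parse(src)
    return any(isinstance(n, ast.Return)
               and isinstance(n.value, ast.BinOp)
               and isinstance(n.value.op, ast.Add)
               for n in ast.walk(tree))

def syntax_guided_reduce(src, oracle):
    # Greedily delete whole statements (valid AST subtrees) from the
    # first function's body, keeping a deletion only if the oracle's
    # prediction is unchanged. Requires Python 3.9+ (ast.unparse).
    tree = ast.parse(src)
    fn = tree.body[0]
    i = 0
    while i < len(fn.body):
        if len(fn.body) > 1:
            saved = fn.body[:]
            del fn.body[i]
            if oracle(ast.unparse(tree)):
                continue  # deletion kept; retry the same index
            fn.body = saved
        i += 1
    return ast.unparse(tree)

src = "def f(a, b):\n    log = 'calling f'\n    print(log)\n    return a + b"
reduced = syntax_guided_reduce(src, predicts_add)
print(reduced)
```

Because every intermediate candidate is a syntactically valid program, no reduction step is wasted on unparsable variants, which is where the speedup over token-level deletion comes from.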
There are several approaches for encoding source code into the input vectors of neural models. These approaches attempt to include various syntactic and semantic features of the input programs in their encodings. In this paper, we investigate Code2Snapshot, a novel representation of source code based on snapshots of input programs. We evaluate several variations of this representation and compare its performance with state-of-the-art representations that utilize rich syntactic and semantic features of the input programs. Our preliminary study of the utility of Code2Snapshot for the code summarization task suggests that simple snapshots of input programs perform comparably to state-of-the-art representations. Interestingly, obscuring the input programs has an insignificant impact on Code2Snapshot performance, suggesting that, for some tasks, neural models can deliver high performance by relying merely on the structure of the input programs.
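A minimal sketch can make the "structure only" finding concrete. The snapshot function below is our own crude approximation (the paper renders actual images); it records only where non-space characters sit, so obscuring identifiers leaves the snapshot unchanged:

```python
def snapshot(source, width=32):
    # A crude "snapshot": a binary grid with 1 wherever a non-space
    # character is printed, ignoring what the character actually is.
    grid = []
    for line in source.splitlines():
        grid.append([1 if i < len(line) and line[i] != ' ' else 0
                     for i in range(width)])
    return grid

src = "def add(a, b):\n    return a + b"
# "Obscuring": replace every identifier/keyword character with 'x'.
obscured = ''.join('x' if c.isalnum() else c for c in src)
print(snapshot(src) == snapshot(obscured))  # True
```

Since the obscured program occupies exactly the same character positions, any model consuming such snapshots is, by construction, insensitive to naming, mirroring the abstract's observation.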
Deep neural networks (DNNs) are increasingly being used in software engineering and code intelligence tasks. These are powerful tools, capable of learning highly generalizable patterns from large datasets through millions of parameters. At the same time, their large capacity can make them prone to memorizing data points. Recent work suggests that the risk of memorization manifests especially strongly when the training dataset is noisy, containing many ambiguous or questionable samples, and memorization is the only recourse. The goal of this paper is to evaluate and compare the extent of memorization and generalization in neural code intelligence models. It aims to provide insight into how memorization may affect the learning behavior of neural models in code intelligence systems. To observe the degree of memorization in the models, we add random noise to the original training datasets and use various metrics to quantify the impact of the noise on different aspects of training and testing. We evaluate several state-of-the-art neural code intelligence models and benchmarks based on Java, Python, and Ruby codebases. Our results highlight an important risk: millions of trainable parameters allow neural networks to memorize anything, including noisy data, and provide a false sense of generalization. We observed that all the models manifest some form of memorization. This can be concerning for most code intelligence tasks, since they rely on rather noise-prone and repetitive data sources, such as code from GitHub. To the best of our knowledge, we provide the first study to quantify memorization effects in the domain of software engineering and code intelligence systems. This work raises awareness and provides new insights into important issues in training neural models that are often overlooked by software engineering researchers.
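The noise-injection measurement can be sketched as follows. The helper name and parameters are our own, and the real study trains full neural models; this toy version only shows the bookkeeping: corrupt a known fraction of labels, then check how much of that corruption a model reproduces:

```python
import random

def inject_label_noise(labels, frac, num_classes, seed=0):
    # Flip a fixed fraction of labels to a different random class;
    # return the noisy labels and the set of corrupted indices.
    rng = random.Random(seed)
    noisy = list(labels)
    corrupted = set(rng.sample(range(len(labels)), int(frac * len(labels))))
    for i in corrupted:
        noisy[i] = (noisy[i] + rng.randrange(1, num_classes)) % num_classes
    return noisy, corrupted

labels = [0, 1] * 50  # 100 toy labels, 2 classes
noisy, corrupted = inject_label_noise(labels, 0.2, num_classes=2)

# A pure lookup "model" that memorizes its training set reproduces
# every corrupted label -- the memorization signal the metrics track.
memorized = sum(noisy[i] != labels[i] for i in corrupted)
print(len(corrupted), memorized)  # 20 20
```

A model that truly generalized would predict the original (clean) label on the corrupted points, so the gap between these two accuracies quantifies memorization.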
Performance metrics-driven context caching has a profound impact on throughput and response time in distributed context management systems for real-time context queries. This paper proposes a reinforcement learning based approach to adaptively cache context with the objective of minimizing the cost incurred by context management systems in responding to context queries. Our novel algorithms enable context queries and sub-queries to reuse and repurpose cached context in an efficient manner. This approach is distinct from traditional data caching approaches in three main ways. First, we make selective context cache admissions using no prior knowledge of the context or the context query load. Second, we develop and incorporate innovative heuristic models that calculate the expected performance of caching an item when making admission decisions. Third, our strategy defines a time-aware continuous cache action space. We present two reinforcement learning agents: a value-function-estimating actor-critic agent and a policy search agent using the deep deterministic policy gradient method. The paper also proposes adaptive policies, such as eviction and cache memory scaling, to complement our objective. Our method is evaluated using a synthetically generated load of context sub-queries and a synthetic dataset inspired by real-world data and query samples. We further investigate optimal adaptive caching configurations under different settings. This paper presents, compares, and discusses our findings, showing that the proposed selective caching methods achieve short- and long-term cost and performance efficiency. The paper demonstrates that the proposed methods outperform other modes of context management, such as the redirector mode, the database mode, and the cache-all policy, by up to 60% in cost efficiency.
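The selective-admission decision at the heart of this approach can be sketched as a cost comparison. The paper learns this trade-off with actor-critic and DDPG agents; the function and parameter names below are our own simplification of the expected-performance heuristic, not the paper's model:

```python
def expected_caching_benefit(hit_rate_per_s, latency_saved_ms,
                             refresh_rate_per_s, refresh_cost_ms):
    # Net latency saved per second if the context item is cached:
    # cache hits avoid a full retrieval, but keeping transient
    # context fresh incurs a recurring refresh cost.
    return (hit_rate_per_s * latency_saved_ms
            - refresh_rate_per_s * refresh_cost_ms)

def admit(item_stats):
    # Selective admission: cache only items with positive expected benefit.
    return expected_caching_benefit(**item_stats) > 0

hot = dict(hit_rate_per_s=5.0, latency_saved_ms=40.0,
           refresh_rate_per_s=0.5, refresh_cost_ms=60.0)
cold = dict(hit_rate_per_s=0.1, latency_saved_ms=40.0,
            refresh_rate_per_s=1.0, refresh_cost_ms=60.0)
print(admit(hot), admit(cold))  # True False
```

A learned agent replaces these fixed estimates with values inferred online, which is what makes the admissions work without prior knowledge of the query load.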
It does not matter whether it is a job interview with Tech Giants, Wall Street firms, or a small startup; all candidates want to demonstrate their best selves or even present themselves better than they really are. Meanwhile, recruiters want to know the candidates' authentic selves and detect soft skills that prove an expert candidate would be a great fit in any company. Recruiters worldwide usually struggle to find employees with the highest level of these skills. Digital footprints can assist recruiters in this process by providing candidates' unique sets of online activities, and social media delivers one of the largest digital footprints for tracking people. In this study, for the first time, we show that a wide range of behavioral competencies, consisting of 16 in-demand soft skills, can be automatically predicted from Instagram profiles based on following lists and other quantitative features using machine learning algorithms. We also provide predictions of Big Five personality traits. Models were built on a sample of 400 Iranian volunteer users who answered an online questionnaire and provided their Instagram usernames, which allowed us to crawl their public profiles. We applied several machine learning algorithms to the uniformed data. Deep learning models mostly outperformed the alternatives, achieving 70% and 69% average accuracy in two-level and three-level classifications, respectively. Creating a large pool of people with the highest level of soft skills, and making more accurate evaluations of job candidates, is possible with the application of AI to social media user-generated data.
Vision transformers (ViTs) are quickly becoming the de-facto architecture for computer vision, yet we understand very little about why they work and what they learn. While existing studies visually analyze the mechanisms of convolutional neural networks, an analogous exploration of ViTs remains challenging. In this paper, we first address the obstacles to performing visualizations on ViTs. Assisted by these solutions, we observe that neurons in ViTs trained with language model supervision (e.g., CLIP) are activated by semantic concepts rather than visual features. We also explore the underlying differences between ViTs and CNNs, and we find that transformers detect image background features, just like their convolutional counterparts, but their predictions depend far less on high-frequency information. On the other hand, both architecture types behave similarly in the way features progress from abstract patterns in early layers to concrete objects in late layers. In addition, we show that ViTs maintain spatial information in all layers except the final layer. In contrast to previous works, we show that the last layer most likely discards the spatial information and behaves as a learned global pooling operation. Finally, we conduct large-scale visualizations on a wide range of ViT variants, including DeiT, CoaT, ConViT, PiT, Swin, and Twin, to validate the effectiveness of our method.
Machine learning algorithms have revolutionized different fields, including natural language processing, computer vision, signal processing, and medical data processing. Despite the excellent capabilities of machine learning algorithms across various tasks and areas, their performance deteriorates markedly when there is a shift between the test and training data distributions. This gap occurs due to the violation of the fundamental assumption that the training and test data are independent and identically distributed (i.i.d.). In real-world scenarios where collecting data from all possible domains for training is costly or even impossible, the i.i.d. assumption can hardly be satisfied. The problem is even more severe in the case of medical images and signals, because collecting data, even for a single domain, requires either expensive equipment or a meticulous experimental setup. Additionally, the decrease in performance may have severe consequences in the analysis of medical records. As a result of such problems, the ability to generalize and adapt under distribution shifts (domain generalization (DG) and domain adaptation (DA)) is essential for the analysis of medical data. This paper provides the first systematic review of DG and DA on functional brain signals, filling the gap left by the absence of a comprehensive study in this area. We provide detailed explanations and categorizations of the datasets, approaches, and architectures used in DG and DA on functional brain images. We further discuss noteworthy future directions in this field.